
    The Kinetics of Volatile Lead Compound Formation During Simulated Hazardous Waste Incineration

    Air pollution from fine metal-containing particles (and vapors) formed during hazardous waste incineration has attracted less attention than other incinerator emissions. Recently, however, metal pollution has become subject to more stringent regulations: the U.S. Environmental Protection Agency has announced limits on the emissions of ten toxic metals. Pollution control systems that effectively remove fine (submicron) metal-containing particles from flue gases are difficult to construct, and incinerators have not been designed or operated to minimize the formation of such particles. Industrial-scale incineration testing has produced anomalous results for air emissions of lead and other metals. To gain a better fundamental understanding of how metal emissions are generated, the kinetics of volatile pollutant formation in simulated incinerator kilns were examined experimentally as a function of temperature and acid gas concentration. The specific focus is the chemical reactions between lead oxide, a compound of a toxic metal commonly found in hazardous waste streams, and hydrogen chloride, which is commonly produced in hazardous waste incineration. Descriptive models were constructed of the kinetics and mass transport of the lead dichloride and oxychloride intermediate compounds that were found to be generated. Fine particulate matter composed of lead dichloride was produced. At temperatures between 260 and 310°C, the hydrochloridation reaction kinetics were measured and an apparent activation energy of 22 kcal/mol was obtained. At 300°C, an apparent reaction rate of 10⁻⁷ mol/(cm²·s) was found at 2000 ppm HCl. At higher temperatures, the formation of lead dichloride was diffusionally controlled, principally in the liquid or solid ash phases, and, until 600°C was reached, changed little from the 10⁻⁷ mol/(cm²·s) rate in approximately 30- to 60-min batch hydrochloridation experiments. An interesting phenomenon was observed: at temperatures above 590°C a glassy surface ash phase formed, while between 450 and 590°C a distinctly liquid ash phase formed. The glassy ash phase grew rapidly in thickness and exhibited a lower volatility of lead dichloride than did the liquid phase formed at lower temperatures. This has interesting implications for incineration, because it is a situation in which the higher-volatility product is formed at the lower temperature, contrary to most expectations and to purely equilibrium considerations.
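
    As a rough illustration of the temperature dependence these numbers imply in the kinetically controlled regime, the minimal Arrhenius sketch below back-calculates an apparent pre-exponential factor from the single 300°C data point quoted above and extrapolates across 260-310°C. All numerical values are taken from the abstract; the calculation is illustrative only and is not part of the original study.

```python
import math

# Minimal Arrhenius sketch for the kinetically controlled regime (260-310 C),
# using only the values quoted in the abstract: Ea ~ 22 kcal/mol and an
# apparent rate of ~1e-7 mol/(cm^2*s) at 300 C and 2000 ppm HCl. The
# pre-exponential factor is back-calculated from that single point and is
# illustrative, not a fitted parameter.

R = 1.987e-3             # gas constant, kcal/(mol*K)
EA = 22.0                # apparent activation energy, kcal/mol
T_REF = 300.0 + 273.15   # reference temperature, K
RATE_REF = 1.0e-7        # apparent rate at T_REF, mol/(cm^2*s)

A = RATE_REF / math.exp(-EA / (R * T_REF))  # apparent pre-exponential factor

def arrhenius_rate(temp_c: float) -> float:
    """Apparent surface reaction rate in mol/(cm^2*s) at temp_c degrees Celsius."""
    return A * math.exp(-EA / (R * (temp_c + 273.15)))

if __name__ == "__main__":
    for t in (260, 280, 300, 310):
        print(f"{t:3d} C -> {arrhenius_rate(t):.2e} mol/(cm^2*s)")
```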

    The Need for Medically Aware Video Compression in Gastroenterology

    Compression is essential to storing and transmitting medical videos, but the effect of compression on downstream medical tasks is often ignored. Furthermore, systems in practice rely on standard video codecs, which naively allocate bits between medically relevant frames or parts of frames. In this work, we present an empirical study of some deficiencies of classical codecs on gastroenterology videos and motivate our ongoing work to train a learned compression model for colonoscopy videos. We show that two of the most common classical codecs, H.264 and HEVC, compress medically relevant frames statistically significantly worse than medically nonrelevant ones, and that polyp detector performance degrades rapidly as compression increases. We explain how a learned compressor could allocate bits to important regions and allow detection performance to degrade more gracefully. Many of our proposed techniques generalize to medical video domains beyond gastroenterology. Comment: Medical Imaging Meets NeurIPS Workshop 2022, NeurIPS 2022.
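
    To make the frame-level comparison concrete, the sketch below shows one way such a per-frame quality analysis could be set up: compute a quality score for every decoded frame and test whether medically relevant frames score worse than nonrelevant ones. The frame arrays and the relevance mask are hypothetical placeholders; this is not the authors' pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """PSNR in dB between two uint8 frames."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def compare_frame_quality(original, decoded, relevant):
    """original/decoded: sequences of frames already decoded from a standard
    codec (e.g. H.264/HEVC); relevant: boolean mask for medically relevant
    frames. Returns the mean PSNR of each group and a one-sided p-value for
    the hypothesis that relevant frames are reconstructed worse."""
    scores = np.array([psnr(o, d) for o, d in zip(original, decoded)])
    mask = np.asarray(relevant)
    rel, nonrel = scores[mask], scores[~mask]
    _, p_value = mannwhitneyu(rel, nonrel, alternative="less")
    return rel.mean(), nonrel.mean(), p_value
```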

    Nonnegative subtheories and quasiprobability representations of qubits

    Negativity in a quasiprobability representation is typically interpreted as an indication of nonclassical behavior. However, this does not preclude states that are non-negative from exhibiting phenomena typically associated with quantum mechanics: the single-qubit stabilizer states have non-negative Wigner functions and yet play a fundamental role in many quantum information tasks. We seek to determine what other sets of quantum states and measurements for a qubit can be non-negative in a quasiprobability representation, and to identify nontrivial unitary groups that permute the states in such a set. These sets of states and measurements are analogous to the single-qubit stabilizer states. We show that no quasiprobability representation of a qubit can be non-negative for more than four bases and that the non-negative bases in any quasiprobability representation must satisfy certain symmetry constraints. We provide an exhaustive list of the sets of single-qubit bases that are non-negative in some quasiprobability representation and are also permuted by a nontrivial unitary group. This list includes two families of three bases that both include the single-qubit stabilizer states as a special case and a family of four bases whose symmetry group is the Pauli group. For higher dimensions, we prove that there can be no more than 2^{d^2} states in non-negative bases of a d-dimensional Hilbert space in any quasiprobability representation. Furthermore, these bases must satisfy certain symmetry constraints, corresponding to requiring the bases to be sufficiently complementary to each other. Comment: 17 pages, 8 figures, comments very welcome; v2 is the published version. Note that the statement and proof of Theorem III.2 in the published version are incorrect (an erratum has been submitted); this arXiv version (v2) presents the corrected theorem and proof. The conclusions of the paper are unaffected by this correction.
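
    The role of the single-qubit stabilizer states can be checked directly in one standard discrete Wigner representation (Wootters-style phase-point operators). The sketch below is an illustrative computation, not code from the paper: it shows a stabilizer state with a non-negative Wigner function and a non-stabilizer state that picks up negativity in this particular representation.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def phase_point(a: int, b: int) -> np.ndarray:
    """One standard choice of single-qubit phase-point operator A_{a,b}."""
    return 0.5 * (I2 + (-1) ** a * Z + (-1) ** b * X + (-1) ** (a + b) * Y)

def wigner(rho: np.ndarray) -> np.ndarray:
    """2x2 array of Wigner values W_{a,b} = Tr(rho A_{a,b}) / 2."""
    return np.array([[np.real(np.trace(rho @ phase_point(a, b))) / 2
                      for b in (0, 1)] for a in (0, 1)])

def bloch_state(n) -> np.ndarray:
    """Qubit density matrix with Bloch vector n = (nx, ny, nz)."""
    return 0.5 * (I2 + n[0] * X + n[1] * Y + n[2] * Z)

if __name__ == "__main__":
    stabilizer = bloch_state((0.0, 0.0, 1.0))               # |0>, a stabilizer state
    nonstabilizer = bloch_state(-np.ones(3) / np.sqrt(3))   # a non-stabilizer state
    print("stabilizer Wigner min:", wigner(stabilizer).min())          # >= 0
    print("non-stabilizer Wigner min:", wigner(nonstabilizer).min())   # < 0 here
```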

    Full Resolution Image Compression with Recurrent Neural Networks

    This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the network: each network need only be trained once. All of our architectures consist of a recurrent neural network (RNN)-based encoder and decoder, a binarizer, and a neural network for entropy coding. We compare RNN types (LSTM, associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study "one-shot" versus additive reconstruction architectures and introduce a new scaled-additive framework. We compare to previous work, showing improvements of 4.3%-8.8% AUC (area under the rate-distortion curve), depending on the perceptual metric used. As far as we know, this is the first neural network architecture able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, both with and without the aid of entropy coding. Comment: Updated with content for CVPR; supplemental material moved to an external link due to size limitations.
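
    The additive reconstruction idea mentioned above can be summarized in a few lines of control flow: at every iteration the encoder sees the current residual, the binarizer quantizes the code, and the decoder's output is added to the running reconstruction, so more iterations spend more bits and (for a trained model) refine the image. The sketch below uses toy stand-in encoder/decoder functions purely to make that loop concrete; it is not the paper's recurrent architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W_ENC = rng.normal(size=(64, 16)) * 0.1   # toy projections standing in for the
W_DEC = rng.normal(size=(16, 64)) * 0.1   # paper's RNN encoder/decoder

def encode(residual: np.ndarray) -> np.ndarray:
    return residual @ W_ENC

def binarize(code: np.ndarray) -> np.ndarray:
    return np.sign(code)                  # roughly 1 bit per code element

def decode(bits: np.ndarray) -> np.ndarray:
    return bits @ W_DEC

def compress_additive(image: np.ndarray, iterations: int):
    """Additive reconstruction: each iteration encodes the residual and adds
    the decoded increment to the running reconstruction."""
    reconstruction = np.zeros_like(image)
    bitstream = []
    for _ in range(iterations):
        residual = image - reconstruction
        bits = binarize(encode(residual))
        bitstream.append(bits)
        reconstruction = reconstruction + decode(bits)
    return bitstream, reconstruction

if __name__ == "__main__":
    x = rng.normal(size=(1, 64))
    bits, rec = compress_additive(x, iterations=4)
    print(len(bits), "code chunks of shape", bits[0].shape, "-> reconstruction", rec.shape)
```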

    Universal Paralinguistic Speech Representations Using Self-Supervised Conformers

    Many speech applications require understanding aspects beyond the words being spoken, such as recognizing emotion, detecting whether the speaker is wearing a mask, or distinguishing real from synthetic speech. In this work, we introduce a new state-of-the-art paralinguistic representation derived from large-scale, fully self-supervised training of a 600M+ parameter Conformer-based architecture. We benchmark on a diverse set of speech tasks and demonstrate that simple linear classifiers trained on top of our time-averaged representation outperform nearly all previous results, in some cases by large margins. Our analyses of context window size demonstrate that, surprisingly, 2-second context windows achieve 96% of the performance of the Conformers that use the full long-term context on 7 out of 9 tasks. Furthermore, while the best per-task representations are extracted internally in the network, stable performance across several layers allows a single universal representation to reach near-optimal performance on all tasks.
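
    The evaluation recipe described above (a frozen encoder, time-averaged embeddings, and a linear classifier on top) is straightforward to express in code. In the sketch below, frame_embeddings is a hypothetical placeholder for the pretrained Conformer and the embedding dimensionality is illustrative; only the time-averaging and linear-probe steps reflect what the abstract describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EMBED_DIM = 1024  # illustrative, not the model's actual width

def frame_embeddings(wav: np.ndarray) -> np.ndarray:
    """Placeholder for the pretrained encoder: (num_frames, EMBED_DIM) per utterance."""
    rng = np.random.default_rng(len(wav))
    return rng.normal(size=(max(1, len(wav) // 160), EMBED_DIM))

def utterance_embedding(wav: np.ndarray) -> np.ndarray:
    return frame_embeddings(wav).mean(axis=0)   # time-averaged representation

def train_linear_probe(wavs, labels):
    """Fit a simple linear classifier on top of time-averaged embeddings."""
    features = np.stack([utterance_embedding(w) for w in wavs])
    return LogisticRegression(max_iter=1000).fit(features, labels)
```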

    FRILL: A Non-Semantic Speech Embedding for Mobile Devices

    Learned speech representations can drastically improve performance on tasks with limited labeled data. However, due to their size and complexity, learned representations have limited utility in mobile settings, where run-time performance can be a significant bottleneck. In this work, we propose a class of lightweight non-semantic speech embedding models, based on the recently proposed TRILL speech embedding, that run efficiently on mobile devices. We combine novel architectural modifications with existing speed-up techniques to create embedding models that are fast enough to run in real time on a mobile device and exhibit minimal performance degradation on a benchmark of non-semantic speech tasks. One such model (FRILL) is 32x faster than TRILL on a Pixel 1 smartphone and 40% of its size, with an average decrease in accuracy of only 2%. To our knowledge, FRILL is the highest-quality non-semantic embedding designed for use on mobile devices. Furthermore, we demonstrate that these representations are useful for mobile health tasks such as non-speech human sound detection and face-masked speech detection. Our models and code are publicly available. Comment: Accepted to Interspeech 2021.
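
    Quantization is one generic mobile speed-up technique of the kind alluded to above. As a hedged illustration (this is not the authors' distillation or training pipeline, and the file paths are hypothetical), the sketch below applies standard TensorFlow Lite post-training dynamic-range quantization to a saved embedding model to shrink it for on-device use.

```python
import tensorflow as tf

def quantize_for_mobile(saved_model_dir: str, output_path: str) -> None:
    """Convert a SavedModel to a dynamically quantized TFLite flatbuffer."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
    tflite_model = converter.convert()
    with open(output_path, "wb") as f:
        f.write(tflite_model)

# Hypothetical usage:
# quantize_for_mobile("embedding_savedmodel/", "embedding_quantized.tflite")
```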

    Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks

    We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM. We introduce three improvements over previous research that lead to this state-of-the-art result. First, we show that training with a pixel-wise loss weighted by SSIM increases reconstruction quality according to several metrics. Second, we modify the recurrent architecture to improve spatial diffusion, which allows the network to more effectively capture and propagate image information through the network's hidden state. Finally, in addition to lossless entropy coding, we use a spatially adaptive bit allocation algorithm to more efficiently use the limited number of bits to encode visually complex image regions. We evaluate our method on the Kodak and Tecnick image sets and compare against standard codecs as well as recently published methods based on deep neural networks.
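
    The first improvement, an SSIM-weighted pixel-wise loss, can be sketched as follows: compute a local SSIM map between the reconstruction and the original, and weight a per-pixel error term more heavily where local structural similarity is poor. The exact weighting below is an assumption for illustration, not the paper's training loss.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_weighted_l1(original: np.ndarray, reconstruction: np.ndarray) -> float:
    """original, reconstruction: float grayscale images in [0, 1]."""
    _, ssim_map = structural_similarity(
        original, reconstruction, data_range=1.0, full=True)
    weights = 1.0 - ssim_map                     # emphasize poorly reconstructed regions
    per_pixel_error = np.abs(original - reconstruction)
    return float(np.sum(weights * per_pixel_error) / (np.sum(weights) + 1e-8))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    rec = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0)
    print("SSIM-weighted L1:", ssim_weighted_l1(img, rec))
```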